Fine-grained classification and counting of bone marrow erythroid cells are vital for assessing health status and formulating therapeutic schedules for leukemia and other hematological diseases. Due to the subtle visual differences between different types of erythroid cells, it is challenging to apply existing image-based deep learning models to fine-grained erythroid cell classification. Moreover, there are no large open-source datasets of erythroid cells to support model training. In this paper, we introduce BMEC (Bone Marrow Erythroid Cells), the first large fine-grained image dataset of erythroid cells, to facilitate further deep learning research on erythroid cells. BMEC contains 5,666 images of individual erythroid cells, each of which is extracted from bone marrow erythroid cell smears and professionally annotated as one of four types of erythroid cells. One key indicator for distinguishing erythroid cells is the cell shape, which is closely related to cell growth and maturation. Therefore, we design a novel shape-aware image classification network for fine-grained erythroid cell classification. A shape feature is extracted from the shape mask image and aggregated with the raw image feature through a shape attention module. With the shape-attended image feature, our network achieves superior classification performance (81.12% top-1 accuracy) on the BMEC dataset compared to the baseline methods. Ablation studies also demonstrate the effectiveness of incorporating shape information for fine-grained cell classification. To further verify the generalizability of our method, we tested our network on two additional public white blood cell (WBC) datasets, and the results show that our shape-aware method generally outperforms recent state-of-the-art works on WBC classification. The code and the BMEC dataset are available at https://github.com/wangye8899/BMEC.
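A minimal PyTorch sketch of how a shape-attention aggregation of this kind could look; the module name, gating design, and feature dimension are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShapeAttentionFusion(nn.Module):
    """Illustrative sketch: fuse a pooled raw-image feature with a
    shape-mask feature via a learned channel gate (assumed design)."""
    def __init__(self, dim=512):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
            nn.Sigmoid(),
        )

    def forward(self, img_feat, shape_feat):
        # img_feat, shape_feat: (batch, dim) pooled backbone features
        attn = self.gate(torch.cat([img_feat, shape_feat], dim=-1))
        return img_feat + attn * shape_feat  # shape-attended image feature
```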
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance on in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains but neglect the ability of the learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates domain-invariant prompts generalizable to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both the image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains a prompt tuned on a specific domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
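A hedged sketch of the general meta-learning constraint described here, in MAML style: the prompt is adapted on one domain, then required to also fit a held-out domain. The `model.loss` helper and learning rate are hypothetical stand-ins, not MetaPrompt's API.

```python
import torch

def meta_prompt_step(prompt, model, batch_a, batch_b, inner_lr=0.01):
    """Hypothetical MAML-style episode: tune the prompt on domain A,
    then require the adapted prompt to also perform well on domain B."""
    loss_a = model.loss(prompt, batch_a)                  # inner loss on domain A
    grad = torch.autograd.grad(loss_a, prompt, create_graph=True)[0]
    adapted = prompt - inner_lr * grad                    # one inner-loop step
    loss_b = model.loss(adapted, batch_b)                 # meta loss on held-out domain B
    return loss_a + loss_b                                # backprop through both terms
```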
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding comes not only from the accurate perception of each user's affective state (e.g., engagement) but also from the recognition of the affect interplay between the members (e.g., joint engagement), which presents as complex but subtle nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets processed with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we demonstrate experimental results on the use of the trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representations of the models to investigate model interpretability when recognizing joint engagement. This work serves as the first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.
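For concreteness, a minimal sketch of one of the named augmentations, CutOut, applied to a video clip tensor; the patch size and zero-fill choice are assumptions rather than the paper's exact settings.

```python
import torch

def cutout(frames, size=40):
    """Illustrative CutOut for a video clip: zero a random square patch
    at the same location in every frame. frames: (T, C, H, W) tensor."""
    _, _, h, w = frames.shape
    cy = torch.randint(0, h, (1,)).item()
    cx = torch.randint(0, w, (1,)).item()
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = frames.clone()
    out[:, :, y0:y1, x0:x1] = 0.0  # occlude the patch across all frames
    return out
```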
Mobile parcel lockers (MPLs) have recently been proposed by logistics operators as a technology that could help reduce traffic congestion and operational costs in urban freight distribution. Given their ability to relocate throughout their deployment area, they have the potential to improve customer accessibility and convenience. In this study, we formulate the Mobile Parcel Locker Problem (MPLP), a special case of the Location-Routing Problem (LRP), which determines the optimal stopover locations for MPLs throughout the day and plans the corresponding delivery routes. A hybrid Q-learning-network-based method (HQM) is developed to resolve the computational complexity of the resulting large problem instances while escaping local optima. Furthermore, HQM is integrated with global and local search mechanisms to resolve the exploration-exploitation dilemma faced by classic reinforcement learning (RL) methods. We examine the performance of HQM under different problem sizes (up to 200 nodes) and benchmark it against a genetic algorithm (GA). Our results indicate that the average reward obtained by HQM is 1.96 times higher than that of GA, which demonstrates that HQM has better optimization capability. Finally, we identify the key factors that contribute to fleet size requirements, travel distances, and service delays. Our findings outline that the efficiency of MPLs is mainly governed by the length of time windows and the deployment of MPL stopovers.
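A minimal sketch of the tabular Q-learning core such a method builds on, with an epsilon-greedy rule standing in for the exploration-exploitation trade-off mentioned above; the state/action abstraction and hyperparameters are placeholders, not the paper's MPLP formulation.

```python
import random
from collections import defaultdict

# Hypothetical Q-table for a stopover-selection agent.
Q = defaultdict(float)

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(state, actions, eps=0.1):
    """Balance exploration (random move) against exploitation (best move)."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```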
Recently, extracting data-driven governing laws of dynamical systems through deep learning frameworks has attracted much attention in various fields. Moreover, a growing body of research work tends to move from deterministic dynamical systems to stochastic dynamical systems, especially those driven by non-Gaussian multiplicative noise. However, many log-likelihood-based algorithms for the Gaussian case cannot be directly extended to non-Gaussian scenarios, where they may suffer from high error and slow convergence. In this work, we overcome some of these challenges and identify stochastic dynamical systems driven by $\alpha$-stable Lévy noise from only random pairwise data. Our innovations include: (1) designing a deep learning approach to learn both the drift and diffusion coefficients of the Lévy-induced noise, with $\alpha$ across all of its admissible values; (2) learning complex multiplicative noise without restricting it to small noise intensity; (3) an end-to-end, complete framework for stochastic system identification under a general input data assumption, i.e., $\alpha$-stable random variables. Finally, numerical experiments and comparisons with the non-local Kramers-Moyal formulas with moment generating functions confirm the effectiveness of our method.
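For reference, the class of systems described here is conventionally written as the SDE below, where $L_t^{\alpha}$ denotes an $\alpha$-stable Lévy process and the drift $b$ and (multiplicative) diffusion $\sigma$ are the terms to be learned from pairwise samples $(X_t, X_{t+\Delta t})$; this is standard notation, not quoted from the paper.

```latex
\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}L_t^{\alpha},
\qquad \alpha \in (0, 2)
```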
Implicit discourse relation recognition (IDRR) is a challenging but crucial task in discourse analysis. Most existing methods train multiple models to predict multi-level labels independently, while ignoring the dependence between hierarchically structured labels. In this paper, we consider multi-level IDRR as a conditional label sequence generation task and propose a Label Dependence-aware Sequence Generation Model (LDSGM) for it. Specifically, we first design a label-attentive encoder to learn the global representation of an input instance and its level-specific contexts, where label dependence is integrated to obtain better label embeddings. Then, we employ a label sequence decoder to output the predicted labels in a top-down manner, where the predicted higher-level labels are directly used to guide the label prediction at the current level. We further develop a mutual-learning enhanced training method to exploit label dependence in the bottom-up direction, which is captured by an auxiliary decoder introduced during training. Experimental results on the PDTB dataset show that our model achieves state-of-the-art performance on multi-level IDRR. We will release our code at https://github.com/nlpersecjtu/ldsgm.
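A sketch of the top-down decoding idea at inference time: the label greedily decoded at each level conditions the prediction at the next level. The `decoder` interface and level count are assumptions for illustration, not LDSGM's actual API.

```python
import torch

def decode_top_down(decoder, enc_out, num_levels=3):
    """Greedy top-down label-sequence decoding (illustrative sketch):
    higher-level predictions guide lower-level ones."""
    labels, prev = [], None
    for level in range(num_levels):
        logits = decoder(enc_out, prev_label=prev, level=level)
        prev = logits.argmax(dim=-1)   # higher-level label guides the next level
        labels.append(prev)
    return labels                      # one predicted label per hierarchy level
```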
With the rapid development of modern deep learning techniques, the study of dynamical systems and that of neural networks have increasingly benefited each other in many different ways. Since uncertainty often arises in real-world observations, SDEs (stochastic differential equations) come to play an important role. More specifically, in this paper we use an ensemble of SDEs equipped with neural networks to predict the long-term trend of noisy time series with large jumps and high-probability distribution shifts. Our contributions are as follows. First, we use a phase-space reconstruction method to extract the intrinsic dimension of the time series data, in order to determine the input structure of our forecasting model. Second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data, and solve the problem through neural network approximation. Third, we construct an attention mechanism to achieve multi-step prediction. Finally, we illustrate our method by applying it to stock market time series prediction and show that the results outperform several baseline deep learning models.
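A small NumPy sketch of the phase-space reconstruction step via Takens-style delay embedding; the embedding dimension and delay would in practice be chosen by standard heuristics (e.g., false nearest neighbours, mutual information), and the values here are placeholders.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar series x to delay vectors
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}) for phase-space reconstruction."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Example: embed a noisy series into a 3-D reconstructed phase space.
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
states = delay_embed(series, dim=3, tau=5)   # (n, 3) inputs for the forecaster
```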
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
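A hedged sketch of the implicit-alignment idea stated above: encode 3D coordinates into the token feature space and add them to the modality tokens, rather than applying an explicit view transformation. The MLP layout and dimensions are assumptions, not CMT's exact design.

```python
import torch
import torch.nn as nn

class Point3DPositionEncoder(nn.Module):
    """Illustrative sketch: project 3D coordinates into the token space
    so image and point tokens share a common positional reference."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(inplace=True),
                                 nn.Linear(dim, dim))

    def forward(self, tokens, coords):
        # tokens: (batch, n, dim); coords: (batch, n, 3) 3D points per token
        return tokens + self.mlp(coords)  # position-aware multi-modal tokens
```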
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of both graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features within a Transformer-like framework. Our key insights are two-fold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, as sketched below, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30 shots. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
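A minimal sketch of the first insight, mask-based dynamic class centers: pool support features inside the ground-truth masks to get one center per class, then use the center to re-weight query features. Shapes and the sigmoid gating are illustrative assumptions, not RefT's exact module.

```python
import torch

def mask_weighted_class_centers(support_feats, support_masks):
    """support_feats: (K, C, H, W); support_masks: (K, 1, H, W) in {0, 1}.
    Returns one masked-average-pooled center per support class."""
    masked = support_feats * support_masks
    centers = masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1)
    return centers  # (K, C)

def reweight_query(query_feats, center):
    """query_feats: (C, H, W); center: (C,). Channel-wise re-weighting."""
    weights = torch.sigmoid(center)[:, None, None]
    return query_feats * weights
```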